# Mixed Precision Quantization
## Qwen3-235B-A22B mixed 3-6bit

- Publisher: mlx-community
- License: Apache-2.0
- Task: Large Language Model

A mixed 3-6-bit quantized version converted from the Qwen/Qwen3-235B-A22B model, optimized for efficient inference on the Apple MLX framework.
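Mixed-bit schemes like the one above store most weights at a low bit width and keep a minority of sensitive tensors at a higher one, using per-group affine quantization. The sketch below is a minimal NumPy illustration of that idea, not MLX's actual kernels; the group size of 64 and the 3-vs-6-bit comparison are illustrative assumptions.

```python
import numpy as np

def quantize_groups(w, bits, group_size=64):
    """Affine per-group quantization: every `group_size` consecutive weights
    share one scale and offset, stored alongside the integer codes."""
    w = w.reshape(-1, group_size)
    lo = w.min(axis=1, keepdims=True)
    hi = w.max(axis=1, keepdims=True)
    scale = (hi - lo) / (2**bits - 1)
    scale = np.where(scale == 0, 1.0, scale)  # guard constant groups
    q = np.round((w - lo) / scale).astype(np.int32)  # codes in [0, 2**bits - 1]
    return q, scale, lo

def dequantize_groups(q, scale, lo):
    """Reconstruct approximate float weights from codes, scales, offsets."""
    return q * scale + lo

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 256)).astype(np.float32)

# In a mixed 3-6bit model, a layer-selection rule (hypothetical here)
# would keep sensitive tensors at 6 bits and push the rest to 3 bits,
# trading a little accuracy for a much smaller file.
for bits in (3, 6):
    q, scale, lo = quantize_groups(w, bits)
    err = np.abs(dequantize_groups(q, scale, lo) - w.reshape(-1, 64)).max()
    print(f"{bits}-bit max abs reconstruction error: {err:.4f}")
```

Running it shows the expected trade-off: the 6-bit reconstruction error is roughly an order of magnitude below the 3-bit one, which is why only the least sensitive tensors are pushed down to 3 bits.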
## FLUX.1-dev Q8/fp16/fp32 mix (8 to 32 bpw, GGUF)

- Publisher: mo137
- License: Other
- Task: Text-to-Image

An experimental GGUF-converted version of Flux.1-dev, featuring various mixed-precision quantization schemes.
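When a checkpoint mixes formats from 8 to 32 bits per weight, its effective size is set by the parameter-weighted average bit width. The sketch below computes that average for a hypothetical tensor inventory; the parameter counts and the 8.0 bpw figure for Q8 are illustrative assumptions, not Flux.1-dev's real breakdown.

```python
# Hypothetical inventory for a mixed-precision checkpoint:
# (parameter count, bits per weight of the format chosen for that tensor group).
tensors = [
    (8_000_000_000, 8),   # bulk of the network at Q8
    (2_000_000_000, 16),  # sensitive tensors kept at fp16
    (500_000_000, 32),    # a few norms/biases left at fp32
]

total_params = sum(n for n, _ in tensors)
total_bits = sum(n * b for n, b in tensors)
avg_bpw = total_bits / total_params
size_gb = total_bits / 8 / 1e9

print(f"average bpw: {avg_bpw:.2f}")
print(f"payload size: {size_gb:.1f} GB")
```

The same arithmetic explains why a "mixed 8-32 bpw" file can land anywhere between a pure Q8 and a pure fp32 checkpoint in size, depending on how many parameters stay at the high-precision formats.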